4 research outputs found

    Data-driven Inverse Optimization with Imperfect Information

    Full text link
    In data-driven inverse optimization, an observer aims to learn the preferences of an agent who solves a parametric optimization problem depending on an exogenous signal. Thus, the observer seeks the agent's objective function that best explains a historical sequence of signals and corresponding optimal actions. We focus here on situations where the observer has imperfect information, that is, where the agent's true objective function is not contained in the search space of candidate objectives, where the agent suffers from bounded rationality or implementation errors, or where the observed signal-response pairs are corrupted by measurement noise. We formalize this inverse optimization problem as a distributionally robust program minimizing the worst-case risk that the predicted decision (i.e., the decision implied by a particular candidate objective) differs from the agent's actual response to a random signal. We show that our framework offers rigorous out-of-sample guarantees for different loss functions used to measure prediction errors and that the emerging inverse optimization problems can be exactly reformulated as (or safely approximated by) tractable convex programs when a new suboptimality loss function is used. We show through extensive numerical tests that the proposed distributionally robust approach to inverse optimization often attains better out-of-sample performance than the state-of-the-art approaches.
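    A schematic form of the distributionally robust program described above, in our own notation (the symbols \Theta, \ell_\theta and \mathbb{B}_\epsilon are illustrative and not taken verbatim from the paper):

        \min_{\theta \in \Theta} \; \sup_{\mathbb{Q} \in \mathbb{B}_\epsilon(\widehat{\mathbb{P}}_N)} \; \mathbb{E}^{\mathbb{Q}} \big[ \ell_\theta(s, x) \big]

    Here \widehat{\mathbb{P}}_N denotes the empirical distribution of the N observed signal-response pairs, \mathbb{B}_\epsilon(\widehat{\mathbb{P}}_N) is an ambiguity set of distributions close to it, and \ell_\theta(s, x) is a loss quantifying how much the decision predicted under the candidate objective indexed by \theta deviates from the agent's observed response x to the signal s.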

    Decision making under uncertainty: Robust and data-driven approaches

    No full text
    A wide variety of decision problems in engineering, science and economics involve uncertain parameters whose values are unknown to the decision maker when the decisions are made. Ignoring this uncertainty can lead to inferior solutions that perform poorly in practice. Many decision problems under uncertainty also span multiple time stages and thus involve adaptive decisions. Such problems are naturally formulated as multi-stage stochastic programs. Despite its wide applicability, stochastic programming suffers from three major shortcomings. Firstly, stochastic programming models assume precise knowledge of the probability distribution of the uncertain parameters, an assumption which may be difficult to justify in practice when only a finite number of historical observations is available. Secondly, stochastic programming models typically minimize expected cost or disutility. This necessitates the evaluation of a multivariate integral, which can be computationally burdensome, particularly in the presence of high-dimensional uncertainty. Thirdly, multi-stage stochastic programs are inherently intractable as they involve infinitely many constraints as well as functional decision variables ranging over an infinite-dimensional space. In addition, many practical problems also involve integer decision variables, which further aggravates the complexity of even finding a feasible solution.

    The first main objective of this thesis is to formulate and analyze distributionally robust optimization models that overcome the aforementioned shortcomings. In distributionally robust optimization we seek decisions that perform best in view of the worst-case distribution from within some family of distributions of the uncertain parameters. Distributionally robust models thus relax the unrealistic assumption of a known probability distribution. Instead, they rely on the availability of an ambiguity set, that is, a family of distributions consistent with the decision maker's prior information. By employing the classical duality theory for moment problems, distributionally robust optimization problems can be reduced to finite-dimensional conic programs, which are often computationally tractable. In this thesis, we study a distributionally robust variant of the multi-product newsvendor problem. While the resulting formulation is proven to be NP-hard due to the presence of a sum-of-maxima term, we propose a tractable conservative approximation in the form of quadratic decision rules. We further study distributionally robust uncertainty quantification and chance constrained programming problems. We introduce a standardized ambiguity set that captures a wide variety of popular ambiguity sets as special cases, and we present for the first time a thorough complexity analysis of distributionally robust uncertainty quantification and chance constrained programming.

    The second main objective of this thesis is to develop scalable approximations for dynamic optimization problems under uncertainty. For two-stage problems with continuous recourse decisions, some tractable approximation schemes already exist. However, to date hardly any attempts have been made to solve (distributionally) robust dynamic problems with integer recourse decisions. In this thesis, we present a new finite adaptability approximation for two-stage robust binary programs. We show that these problems admit mixed-integer linear programming approximations that are not much harder to solve than their deterministic counterparts. For multi-stage decision problems with continuous recourse decisions, we further propose a data-driven dynamic programming algorithm that allows the decision maker to incorporate the historical observations directly into the solution procedure in an asymptotically consistent manner. We then combine the data-driven method with robust optimization techniques to alleviate the overfitting effects inherent in problems with sparse historical observations.
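    The distributionally robust models studied throughout the thesis share a common schematic form; in our own notation (illustrative only):

        \min_{x \in X} \; \sup_{\mathbb{P} \in \mathcal{P}} \; \mathbb{E}^{\mathbb{P}} \big[ c(x, \xi) \big]

    Here x is the decision, \xi the vector of uncertain parameters, c a cost function, and \mathcal{P} the ambiguity set of distributions consistent with the decision maker's prior information (for example, distributions matching given support and moment conditions). The classical stochastic program is recovered when \mathcal{P} is a singleton, and a purely robust program when \mathcal{P} contains all distributions supported on a given uncertainty set.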

    K-Adaptability in Two-Stage Robust Binary Programming

    No full text
    Over the last two decades, robust optimization has emerged as a computationally attractive approach to formulate and solve single-stage decision problems affected by uncertainty. More recently, robust optimization has been successfully applied to multi-stage problems with continuous recourse. This paper takes a step towards extending the robust optimization methodology to problems with integer recourse, which have largely resisted solution so far. To this end, we approximate two-stage robust integer programs by their corresponding K-adaptability problems, in which the decision maker pre-commits to K second-stage policies here-and-now and implements the best of these policies once the uncertain parameters are observed. We study the approximation quality and the computational complexity of the K-adaptability problem, and we propose two mixed-integer linear programming reformulations that can be solved with off-the-shelf software. We demonstrate the effectiveness of our reformulations for stylized instances of supply chain design, vertex packing, route planning and capital budgeting problems.
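    In schematic form (our notation, illustrative only), the K-adaptability problem commits to a here-and-now decision x and K candidate recourse policies y_1, ..., y_K before the uncertainty \xi is revealed, and then applies the best policy that is feasible for the realized \xi:

        \min_{x, \, y_1, \ldots, y_K} \; \max_{\xi \in \Xi} \; \min_{k = 1, \ldots, K} \left\{ c(\xi)^\top x + d(\xi)^\top y_k \;:\; T(\xi) x + W(\xi) y_k \le h(\xi) \right\}

    For K = 1 this collapses to a static robust problem, while increasing K recovers more and more of the fully adaptive two-stage problem; the paper studies the quality of this approximation and proposes mixed-integer linear programming reformulations that can be handled by standard solvers.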

    Data-Driven Inverse Optimization with Incomplete Information

    No full text
    In data-driven inverse optimization, an observer aims to learn the preferences of an agent who solves a parametric optimization problem depending on an exogenous signal. Thus, the observer seeks the agent's objective function that best explains a historical sequence of signals and corresponding optimal actions. We formalize this inverse optimization problem as a distributionally robust program minimizing the worst-case risk that the estimated decision (i.e., the decision implied by a particular candidate objective) differs from the agent's actual response to a random signal. We show that our framework offers attractive out-of-sample performance guarantees for different prediction errors and that the emerging inverse optimization problems can be reformulated as (or approximated by) tractable convex programs when the prediction error is measured in the space of objective values. A main strength of the proposed approach is that it naturally generalizes to situations where the observer has imperfect information, e.g., when the agent's true objective function is not contained in the space of candidate objectives, when the agent suffers from bounded rationality or implementation errors, or when the observed signal-response pairs are corrupted by measurement noise.
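    The prediction error "measured in the space of objective values" mentioned above is a suboptimality-type loss; schematically, in our own notation (illustrative, not taken verbatim from the paper):

        \ell_\theta(s, x) = f_\theta(s, x) - \min_{y \in \mathbb{X}(s)} f_\theta(s, y)

    that is, the gap between the value that the candidate objective f_\theta assigns to the agent's observed response x and the best value attainable over the agent's feasible set \mathbb{X}(s) for the signal s. This loss vanishes precisely when x is optimal under the candidate objective, which the abstract identifies as the setting in which tractable convex reformulations become available.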